49 research outputs found

    Baryon polarization in low-energy unpolarized meson-baryon scattering

    We compute the polarization of the final-state baryon, in its rest frame, in low-energy meson-baryon scattering with an unpolarized initial state, in Unitarized BChPT. Free parameters are determined by fitting total and differential cross-section data (and spin-asymmetry or polarization data, if available) for $pK^-$, $pK^+$ and $p\pi^+$ scattering. We also compare our results with those of leading-order BChPT.

    A Framework for Sequential Planning in Multi-Agent Settings

    This paper extends the framework of partially observable Markov decision processes (POMDPs) to multi-agent settings by incorporating the notion of agent models into the state space. Agents maintain beliefs over physical states of the environment and over models of other agents, and they use Bayesian updates to maintain their beliefs over time. The solutions map belief states to actions. Models of other agents may include their belief states and are related to agent types considered in games of incomplete information. We express the agents' autonomy by postulating that their models are not directly manipulable or observable by other agents. We show that important properties of POMDPs, such as convergence of value iteration, the rate of convergence, and piece-wise linearity and convexity of the value functions, carry over to our framework. Our approach complements a more traditional approach to interactive settings that uses Nash equilibria as a solution paradigm. We seek to avoid some of the drawbacks of equilibria, which may be non-unique and do not capture off-equilibrium behaviors. We do so at the cost of having to represent, process and continuously revise models of other agents. Since the agents' beliefs may be arbitrarily nested, the optimal solutions to decision-making problems are only asymptotically computable. However, approximate belief updates and approximately optimal plans are computable. We illustrate our framework using a simple application domain, and we show examples of belief updates and value functions.
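    As an illustrative sketch only (the symbols below, T, O_i, m_j, and the model-update factor, are assumptions rather than the paper's notation): in a single-agent POMDP the belief over physical states is filtered by Bayes' rule, and in the interactive extension the state is augmented with a model of the other agent, so the update additionally marginalizes over that agent's possible actions.

```latex
% Illustrative sketch; notation is assumed, not taken from the paper.
% Single-agent POMDP belief filter over physical states s:
%   b'(s') \propto O(o \mid s', a) \sum_{s} T(s' \mid s, a)\, b(s)
% Interactive case: beliefs range over pairs (s, m_j), where m_j is a model of
% the other agent j; the last factor below stands in for j's own model update.
\[
  b'_i(s', m'_j) \;\propto\;
  \sum_{s,\, m_j} b_i(s, m_j)
  \sum_{a_j} \Pr(a_j \mid m_j)\,
  T(s' \mid s, a_i, a_j)\,
  O_i(o_i \mid s', a_i, a_j)\,
  \Pr(m'_j \mid m_j, a_j, s')
\]
```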

    Graphical models for interactive POMDPs: representations and solutions

    We develop new graphical representations for the problem of sequential decision making in partially observable multiagent environments, as formalized by interactive partially observable Markov decision processes (I-POMDPs). The graphical models called interactive influence diagrams (I-IDs) and their dynamic counterparts, interactive dynamic influence diagrams (I-DIDs), seek to explicitly model the structure that is often present in real-world problems by decomposing the situation into chance and decision variables, and the dependencies between the variables. I-DIDs generalize DIDs, which may be viewed as graphical representations of POMDPs, to multiagent settings in the same way that I-POMDPs generalize POMDPs. I-DIDs may be used to compute the policy of an agent given its belief as the agent acts and observes in a setting that is populated by other interacting agents. Using several examples, we show how I-IDs and I-DIDs may be applied and demonstrate their usefulness. We also show how the models may be solved using the standard algorithms that are applicable to DIDs. Solving I-DIDs exactly involves knowing the solutions of possible models of the other agents. The space of models grows exponentially with the number of time steps. We present a method of solving I-DIDs approximately by limiting the number of other agents’ candidate models at each time step to a constant. We do this by clustering models that are likely to be behaviorally equivalent and selecting a representative set from the clusters. We discuss the error bound of the approximation technique and demonstrate its empirical performance.
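    A minimal sketch of the approximation described above, with all names (prune_models, solve, k) assumed for illustration rather than taken from the authors' system: candidate models of the other agent are clustered by behavioral equivalence, here approximated by comparing the policies they induce, and at most a constant number of representatives is kept per time step.

```python
def prune_models(candidate_models, solve, k):
    """Keep at most k representative models of the other agent.

    candidate_models: iterable of model objects (e.g., a belief plus a frame).
    solve: maps a model to its policy (must be hashable); models inducing the
           same policy are treated as behaviorally equivalent.
    k: maximum number of representatives retained per time step.
    """
    clusters = {}
    for model in candidate_models:
        policy = solve(model)
        clusters.setdefault(policy, model)  # first model seen represents its cluster
    # One representative per behavioral-equivalence class, capped at k.
    return list(clusters.values())[:k]
```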

    Game theory of mind

    This paper introduces a model of ‘theory of mind’, namely, how we represent the intentions and goals of others to optimise our mutual interactions. We draw on ideas from optimum control and game theory to provide a ‘game theory of mind’. First, we consider the representations of goals in terms of value functions that are prescribed by utility or rewards. Critically, the joint value functions and ensuing behaviour are optimised recursively, under the assumption that I represent your value function, your representation of mine, your representation of my representation of yours, and so on ad infinitum. However, if we assume that the degree of recursion is bounded, then players need to estimate the opponent's degree of recursion (i.e., sophistication) to respond optimally. This induces a problem of inferring the opponent's sophistication, given behavioural exchanges. We show it is possible to deduce whether players make inferences about each other and to quantify their sophistication on the basis of choices in sequential games. This rests on comparing generative models of choices with, and without, inference. Model comparison is demonstrated using simulated and real data from a ‘stag-hunt’ game. Finally, we note that exactly the same sophisticated behaviour can be achieved by optimising the utility function itself (through prosocial utility), producing unsophisticated but apparently altruistic agents. This may be relevant ethologically in hierarchical game theory and coevolution.
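    A minimal sketch of bounded recursive reasoning of this kind (the payoff-matrix convention and the uniform level-0 rule are assumptions for illustration, not the paper's model, which works with value functions in sequential games): a level-k player best-responds to the action of a level-(k-1) model of the opponent, bottoming out at an unsophisticated level-0 player.

```python
import numpy as np

def level_k_action(payoff_self, payoff_other, k):
    """Best response of a level-k player in a one-shot bimatrix game.

    payoff_self[i, j]  : my payoff when I play i and the opponent plays j.
    payoff_other[j, i] : opponent's payoff when they play j and I play i.
    k                  : assumed degree of recursion (sophistication).
    """
    if k == 0:
        # Assumed level-0 rule: no model of the opponent, play uniformly at random.
        return int(np.random.randint(payoff_self.shape[0]))
    # Predict the opponent's action by modelling them as a level-(k-1) player.
    opponent_action = level_k_action(payoff_other, payoff_self, k - 1)
    return int(np.argmax(payoff_self[:, opponent_action]))

# Example with illustrative stag-hunt payoffs (action 0 = stag, 1 = hare);
# the game is symmetric, so both players share the same payoff matrix.
A = np.array([[4, 0], [3, 3]])
my_action = level_k_action(A, A, k=2)
```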

    Can bounded and self-interested agents be teammates? Application to planning in ad hoc teams

    Planning for ad hoc teamwork is challenging because it involves agents collaborating without any prior coordination or communication. The focus is on principled methods for a single agent to cooperate with others. This motivates investigating the ad hoc teamwork problem in the context of self-interested decision-making frameworks. Agents engaged in individual decision making in multiagent settings face the task of having to reason about other agents’ actions, which may in turn involve reasoning about others. An established approximation that operationalizes this approach is to bound the infinite nesting from below by introducing level 0 models. For the purposes of this study, individual, self-interested decision making in multiagent settings is modeled using interactive dynamic influence diagrams (I-DIDs). These are graphical models with the benefit that they naturally offer a factored representation of the problem, allowing agents to ascribe dynamic models to others and reason about them. We demonstrate that an implication of bounded, finitely-nested reasoning by a self-interested agent is that it may not obtain optimal team solutions in cooperative settings when it is part of a team. We address this limitation by including models at level 0 whose solutions involve reinforcement learning. We show how the learning is integrated into planning in the context of I-DIDs. This facilitates optimal teammate behavior, and we demonstrate its applicability to ad hoc teamwork on several problem domains and configurations.
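    A minimal sketch of the level-0 idea (the environment interface env.reset/env.step/env.actions and the hyperparameters are assumptions for illustration): instead of a fixed level-0 policy, the bottom of the nesting is a model whose behavior is learned by plain tabular Q-learning.

```python
import random
from collections import defaultdict

def learn_level0_policy(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning for a level-0 model of a teammate.

    env is assumed to expose: reset() -> state, step(action) -> (state, reward, done),
    and a list env.actions of available actions (states and actions hashable).
    Returns a greedy policy mapping state -> action.
    """
    q = defaultdict(float)  # Q-values keyed by (state, action)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy exploration.
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: q[(state, a)])
            next_state, reward, done = env.step(action)
            # One-step Q-learning update.
            best_next = max(q[(next_state, a)] for a in env.actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return lambda s: max(env.actions, key=lambda a: q[(s, a)])
```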

    Arguing with behavior influence: A model for web-based group decision support systems

    In this work, we propose an argumentation-based dialogue model designed for Web-based Group Decision Support Systems that considers the decision-makers' intentions. The intentions are modeled as behavior styles which allow agents to interact with each other as humans would in face-to-face meetings. In addition, we propose a set of arguments that can be used by the agents to perform and evaluate requests while considering the agents' behavior style. The inclusion of decision-makers' intentions aims to create a more reliable and realistic process. In different contexts, our model showed that higher levels of consensus and satisfaction are achieved when agents are modeled with behavior styles than when agents have no features to represent the decision-makers' intentions.

    A flexible coupling approach to multi-agent planning under incomplete information

    The final publication is available at Springer via http://dx.doi.org/10.1007/s10115-012-0569-7. Multi-agent planning (MAP) approaches are typically oriented at solving loosely coupled problems and are ineffective at dealing with more complex, strongly related problems. In most cases, agents work under complete information, building complete knowledge bases. The present article introduces a general-purpose MAP framework designed to tackle problems of any coupling level under incomplete information. Agents in our MAP model are partially unaware of the information managed by the rest of the agents and share only the critical information that affects other agents, thus maintaining a distributed vision of the task. Agents solve MAP tasks through an iterative refinement planning procedure that uses single-agent planning technology. In particular, agents devise refinements through the partial-order planning paradigm, a flexible framework for building refinement plans that leave unsolved details to be gradually completed by means of new refinements. Our proposal is supported by the implementation of a fully operative MAP system, and we report experiments running our system on different types of MAP problems, from the most strongly related to the most loosely coupled.
    This work has been partly supported by the Spanish MICINN under projects Consolider Ingenio 2010 CSD2007-00022 and TIN2011-27652-C03-01, and by the Valencian Prometeo project 2008/051.
    Torreño Lerma, A.; Onaindia De La Rivaherrera, E.; Sapena Vercher, O. (2014). A flexible coupling approach to multi-agent planning under incomplete information. Knowledge and Information Systems 38:141-178. https://doi.org/10.1007/s10115-012-0569-7
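    A minimal sketch of the iterative refinement idea (the data structures and the resolvers callback are assumptions, not the authors' system): a partial plan with open goals is refined step by step, each refinement either reusing an existing step or adding a new one to support a goal, until no open goals remain.

```python
from collections import deque

def refine_plan(initial_plan, open_goals, resolvers):
    """Breadth-first search over refinements of a partial-order plan.

    initial_plan : object representing the current partial plan.
    open_goals   : iterable of unsupported preconditions (flaws) to resolve.
    resolvers    : function (plan, goal) -> list of (refined_plan, new_goals)
                   pairs, one per way of supporting the goal (e.g., reusing an
                   existing step or inserting a new action).
    Returns the first flaw-free plan found, or None if none exists.
    """
    frontier = deque([(initial_plan, list(open_goals))])
    while frontier:
        plan, goals = frontier.popleft()
        if not goals:
            return plan  # no open goals left: the refinement is complete
        goal, rest = goals[0], goals[1:]
        for refined_plan, new_goals in resolvers(plan, goal):
            frontier.append((refined_plan, rest + list(new_goals)))
    return None
```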

    Gaining Competitive Advantage Through Learning Agent Models


    Tourists on the Move
